A Randomized Stochastic Optimization Algorithm: Its Estimation Accuracy
Authors
Abstract
For a randomized stochastic optimization algorithm, the consistency conditions on the estimates are relaxed, and the order of accuracy for a finite number of observations is studied. A new method for realizing this algorithm on quantum computers is developed.
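The abstract does not specify the algorithm, but randomized stochastic optimization schemes of this kind are commonly illustrated by simultaneous-perturbation stochastic approximation (SPSA), which estimates a descent direction from only two noisy function evaluations per step. The sketch below is an illustrative classical example, not the authors' method; all step-size constants and function names are assumptions.

```python
import numpy as np

def spsa_step(theta, loss, k, a=0.1, c=0.1, alpha=0.602, gamma=0.101, rng=None):
    """One SPSA iteration: estimate the gradient from two randomized
    loss evaluations and take a descent step (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    a_k = a / (k + 1) ** alpha   # decaying step-size schedule
    c_k = c / (k + 1) ** gamma   # decaying perturbation-size schedule
    # Rademacher (+/-1) random perturbation of all coordinates at once.
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    # Two-point randomized gradient estimate; 1/delta = delta for +/-1 entries.
    g_hat = (loss(theta + c_k * delta) - loss(theta - c_k * delta)) / (2 * c_k) * delta
    return theta - a_k * g_hat

# Usage: minimize a simple quadratic observed with additive noise.
rng = np.random.default_rng(0)
noisy_loss = lambda th: np.sum(th ** 2) + rng.normal(scale=0.01)
theta = np.array([1.0, -1.0])
for k in range(2000):
    theta = spsa_step(theta, noisy_loss, k, rng=rng)
```

The appeal in this setting is that each iteration needs only two observations of the (noisy) loss, regardless of dimension, which is why randomized perturbation schemes are natural candidates for analyses of estimation accuracy under a finite observation budget.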
Similar resources
Randomized Smoothing for Stochastic Optimization
We analyze convergence rates of stochastic optimization algorithms for nonsmooth convex optimization problems. By combining randomized smoothing techniques with accelerated gradient methods, we obtain convergence rates of stochastic optimization procedures, both in expectation and with high probability, that have optimal dependence on the variance of the gradient estimates. To the best of our k...
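The abstract above combines randomized smoothing with accelerated gradient methods; the details are not given here, but the core smoothing idea, replacing a nonsmooth f by the Gaussian-smoothed surrogate f_mu(x) = E[f(x + mu Z)] and estimating its gradient by sampling, can be sketched as follows (a minimal illustration with assumed parameter values, not the paper's accelerated procedure):

```python
import numpy as np

def smoothed_grad(f, x, mu=0.1, n_samples=64, rng=None):
    """Monte Carlo gradient of the Gaussian-smoothed surrogate
    f_mu(x) = E[f(x + mu*Z)], Z ~ N(0, I), via the identity
    grad f_mu(x) = E[(f(x + mu*Z) - f(x)) * Z] / mu."""
    rng = np.random.default_rng() if rng is None else rng
    Z = rng.standard_normal((n_samples, x.size))
    vals = np.array([f(x + mu * z) for z in Z])
    # Subtracting f(x) acts as a control variate and reduces variance.
    return ((vals - f(x))[:, None] * Z).mean(axis=0) / mu

# Usage: derivative-free descent on the nonsmooth f(x) = ||x||_1.
f = lambda x: np.abs(x).sum()
x = np.array([2.0, -3.0])
rng = np.random.default_rng(0)
for _ in range(500):
    x -= 0.05 * smoothed_grad(f, x, rng=rng)
```

Smoothing trades a controlled bias (of order mu) for a differentiable surrogate whose stochastic gradients have bounded variance, which is what makes acceleration applicable to nonsmooth problems.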
Harmonics Estimation in Power Systems using a Fast Hybrid Algorithm
In this paper, a novel hybrid algorithm for harmonics estimation in power systems is proposed. Estimating the harmonic components is a nonlinear problem due to the nonlinearity of the phase of the sinusoids in distorted waveforms. Most researchers have implemented nonlinear methods to extract the harmonic parameters. However, nonlinear methods for amplitude estimation increase the convergence time. Hen...
A HYBRID SUPPORT VECTOR REGRESSION WITH ANT COLONY OPTIMIZATION ALGORITHM IN ESTIMATION OF SAFETY FACTOR FOR CIRCULAR FAILURE SLOPE
Slope stability is one of the most complex and essential issues for civil and geotechnical engineers, mainly due to the loss of life and the high economic losses resulting from these failures. In this paper, a new approach is presented for estimating the Safety Factor (SF) of circular failure slopes using hybrid support vector regression (SVR) and Ant Colony Optimization (ACO). The ACO is combined with the S...
Nonconvex Sparse Learning via Stochastic Optimization with Progressive Variance Reduction
We propose a stochastic variance reduced optimization algorithm for solving sparse learning problems with cardinality constraints. Sufficient conditions are provided, under which the proposed algorithm enjoys strong linear convergence guarantees and optimal estimation accuracy in high dimensions. We further extend the proposed algorithm to an asynchronous parallel variant with a near linear spe...
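The variance-reduced scheme summarized above is not specified in this excerpt; a common pattern for cardinality-constrained problems combines SVRG-style variance-reduced gradients with hard thresholding after each step. The sketch below is an illustrative instance on sparse least squares, with all step sizes, helper names, and the objective chosen as assumptions for the example.

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x and zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def svrg_ht(A, b, s, eta=0.01, epochs=20, inner=200, rng=None):
    """Sparse least squares, minimize ||A x - b||^2 / (2n) s.t. ||x||_0 <= s,
    via SVRG-style variance-reduced stochastic gradients plus hard thresholding."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        x_ref = x.copy()
        full_grad = A.T @ (A @ x_ref - b) / n   # anchor (full) gradient
        for _ in range(inner):
            i = rng.integers(n)
            # Variance-reduced stochastic gradient at a random row i.
            g = (A[i] * (A[i] @ x - b[i])
                 - A[i] * (A[i] @ x_ref - b[i])
                 + full_grad)
            x = hard_threshold(x - eta * g, s)
    return x

# Usage: recover a 3-sparse signal from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 20))
x_true = np.zeros(20)
x_true[[1, 5, 9]] = [1.0, -2.0, 1.5]
b = A @ x_true + 0.01 * rng.standard_normal(200)
x_hat = svrg_ht(A, b, s=3, rng=rng)
```

The anchor gradient recomputed each epoch shrinks the variance of the stochastic updates as the iterate approaches the reference point, which is the mechanism behind the linear convergence rates such papers establish.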
Stochastic Variance Reduced Optimization for Nonconvex Sparse Learning
We propose a stochastic variance reduced optimization algorithm for solving a class of large-scale nonconvex optimization problems with cardinality constraints. Theoretically, we provide sufficient conditions under which the proposed algorithm enjoys strong linear convergence guarantees and optimal estimation accuracy in high dimensions. We further extend the analysis to its asynchronous varian...